Optimal approximation of piecewise smooth functions using deep ReLU neural networks
We study the necessary and sufficient complexity of ReLU neural networks, in
terms of depth and number of weights, required for approximating classifier
functions in $L^2$. As a model class, we consider the set
$\mathcal{E}^\beta(\mathbb{R}^d)$ of possibly discontinuous piecewise $C^\beta$
functions $f : [-1/2, 1/2]^d \to \mathbb{R}$, where the different smooth regions
of $f$ are separated by $C^\beta$ hypersurfaces. For dimension $d \geq 2$,
regularity $\beta > 0$, and accuracy $\varepsilon > 0$, we construct artificial
neural networks with ReLU activation function that approximate functions from
$\mathcal{E}^\beta(\mathbb{R}^d)$ up to an $L^2$ error of $\varepsilon$. The
constructed networks have a fixed number of layers, depending only on $d$ and
$\beta$, and they have $O(\varepsilon^{-2(d-1)/\beta})$ many nonzero weights,
which we prove to be optimal. In addition to the optimality in terms of the
number of weights, we show that in order to achieve the optimal approximation
rate, one needs ReLU networks of a certain depth. Precisely, for piecewise
$C^\beta(\mathbb{R}^d)$ functions, this minimal depth is given, up to a
multiplicative constant, by $\beta/d$. Up to a log factor, our constructed
networks match this bound. This partly explains the benefits of depth for ReLU
networks by showing that deep networks are necessary to achieve efficient
approximation of (piecewise) smooth functions. Finally, we analyze
approximation in high-dimensional spaces where the function $f$ to be
approximated can be factorized into a smooth dimension-reducing feature map
$\tau$ and a classifier function $g$, defined on a low-dimensional feature
space, as $f = g \circ \tau$. We show that in this case the approximation rate
depends only on the dimension of the feature space and not on the input dimension.
Comment: Generalized some estimates to $L^p$ norms for $0 < p < \infty$
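To make the approximation setting concrete, the following minimal sketch (not the paper's construction, which is considerably more involved and realizes the optimal rate) approximates the piecewise constant classifier $f(x) = 1_{\{x_1 + x_2 > 0\}}$ on $[-1/2, 1/2]^2$ by a one-hidden-layer ReLU network with two neurons, and estimates the $L^2$ error by Monte Carlo. The network and all parameter choices are illustrative assumptions.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def step_approx(t, delta):
    # One-hidden-layer ReLU network (two neurons) approximating the Heaviside
    # step: exact outside a strip of width delta, linear ramp inside it.
    return relu(t / delta + 0.5) - relu(t / delta - 0.5)

# Target: piecewise constant classifier f(x) = 1_{x1 + x2 > 0}, a simple
# member of the piecewise smooth model class for d = 2.
rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, size=(200_000, 2))
f = (x[:, 0] + x[:, 1] > 0).astype(float)

for delta in (0.1, 0.01, 0.001):
    g = step_approx(x[:, 0] + x[:, 1], delta)
    l2_err = np.sqrt(np.mean((g - f) ** 2))  # Monte Carlo estimate of the L^2 error
    print(f"delta={delta:g}  L2 error ~ {l2_err:.4f}")  # decays like sqrt(delta)
```

Note that this toy network keeps its size fixed and instead lets the weight $1/\delta$ grow without bound; the paper's complexity measure counts nonzero weights of controlled size, which is what makes its optimality statements non-trivial.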
Approximation in $L^p(\mu)$ with deep ReLU neural networks
We discuss the expressive power of neural networks which use the non-smooth
ReLU activation function $\varrho(x) = \max\{0, x\}$ by analyzing the
approximation-theoretic properties of such networks. The existing results
mainly fall into two categories: approximation using ReLU networks with a fixed
depth, or using ReLU networks whose depth increases with the approximation
accuracy. After reviewing these findings, we show that the results concerning
networks with fixed depth, which up to now only consider approximation in
$L^p(\lambda)$ for the Lebesgue measure $\lambda$, can be generalized to
approximation in $L^p(\mu)$, for any finite Borel measure $\mu$. In particular,
the generalized results apply in the usual setting of statistical learning
theory, where one is interested in approximation in $L^2(\mathbb{P}_X)$, with the
probability measure $\mathbb{P}_X$ describing the distribution of the data.
Comment: Accepted for presentation at SampTA 2019
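To see why passing from the Lebesgue measure to a general finite Borel measure matters in practice, the sketch below compares the $L^2$ error of one fixed network under the uniform (Lebesgue) measure on $[-1, 1]$ and under a Gaussian data distribution $\mathbb{P}_X$. The interpolant, the target $x^2$, and the distribution are illustrative assumptions, not taken from the paper; a continuous piecewise linear function on $\mathbb{R}$ is exactly realizable by a one-hidden-layer ReLU network.

```python
import numpy as np

# Hypothetical fixed-depth ReLU network: a continuous piecewise linear
# interpolant of f(x) = x^2 with knots at -1, -1/3, 1/3, 1 (such a function
# is exactly realizable by a one-hidden-layer ReLU network).
knots = np.array([-1.0, -1/3, 1/3, 1.0])

def g(x):
    return np.interp(x, knots, knots**2)

def f(x):
    return x**2

rng = np.random.default_rng(1)

# L^2(lambda) error: lambda = normalized Lebesgue (uniform) measure on [-1, 1].
x_unif = rng.uniform(-1.0, 1.0, 500_000)
err_lebesgue = np.sqrt(np.mean((g(x_unif) - f(x_unif)) ** 2))

# L^2(P_X) error: the data distribution P_X = N(0, 0.15^2) puts almost all
# of its mass near 0, so it weights the pointwise error very differently.
x_data = rng.normal(0.0, 0.15, 500_000)
err_data = np.sqrt(np.mean((g(x_data) - f(x_data)) ** 2))

print(f"L2(Lebesgue) error ~ {err_lebesgue:.4f}")
print(f"L2(P_X)      error ~ {err_data:.4f}")  # same network, different number
```

The same network thus gets two different error values depending on the measure, which is why fixed-depth approximation results need to hold for general $\mu$ before they can be invoked in statistical learning theory.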
On Approximate Nonlinear Gaussian Message Passing on Factor Graphs
Factor graphs have recently gained increasing attention as a unified
framework for representing and constructing algorithms for signal processing,
estimation, and control. One capability that does not seem to be well explored
within the factor graph toolkit is the ability to handle deterministic
nonlinear transformations, such as those occurring in nonlinear filtering and
smoothing problems, using tabulated message passing rules. In this
contribution, we provide general forward (filtering) and backward (smoothing)
approximate Gaussian message passing rules for deterministic nonlinear
transformation nodes in arbitrary factor graphs fulfilling a Markov property,
based on numerical quadrature procedures for the forward pass and a
Rauch-Tung-Striebel-type approximation of the backward pass. These message
passing rules can be employed to derive many algorithms for solving
nonlinear problems using factor graphs, as we illustrate by proposing a
nonlinear modified Bryson-Frazier (MBF) smoother based on the presented
message passing rules.
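A minimal sketch of the forward (filtering) rule for a deterministic scalar nonlinearity $y = h(x)$, assuming Gauss-Hermite quadrature for the moment integrals: the incoming Gaussian message $\mathcal{N}(m, V)$ is pushed through $h$ and moment-matched to a Gaussian. Function and variable names are illustrative; the paper's rules cover general factor graphs and also include the Rauch-Tung-Striebel-type backward pass, which is omitted here.

```python
import numpy as np

def forward_gaussian_message(h, m, V, order=10):
    # Quadrature nodes/weights for the probabilists' Hermite weight e^{-x^2/2}.
    nodes, weights = np.polynomial.hermite_e.hermegauss(order)
    w = weights / weights.sum()       # normalize to expectations under N(0, 1)
    x = m + np.sqrt(V) * nodes        # quadrature points under N(m, V)
    hx = h(x)
    mean_y = w @ hx                   # approximate E[h(x)]
    var_y = w @ (hx - mean_y) ** 2    # approximate Var[h(x)]
    return mean_y, var_y              # moment-matched Gaussian message

# Example: push the message N(1.0, 0.25) through h(x) = sin(x).
m_y, V_y = forward_gaussian_message(np.sin, 1.0, 0.25)
print(f"approximate forward message: N({m_y:.4f}, {V_y:.4f})")
```

An unscented-transform-style sigma-point rule could be substituted for the quadrature without changing the structure of the message update.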